Minimax Lower Bound for Passive Convex Optimization
Abstract
In convex optimization, an algorithm typically chooses its next query point based on observations at previously queried points. In this sense, optimization is closely connected with active learning. An interesting question is: what happens if we become "passive" in optimization? That is, how important is adaptivity in optimization? Understanding such questions becomes crucial in settings where observing a single point takes a significant amount of time (e.g., drilling the ground in search of oil, or performing daylong experiments on cells to find an optimal pH). It may also provide intuition for analyzing distributed algorithms in which multiple points are sampled simultaneously at every iteration.
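To make the passive-versus-adaptive distinction concrete, here is a minimal illustrative sketch (not from the paper): a "passive" strategy fixes all of its query points in advance, while an "active" strategy (ternary search, used here purely as an example of adaptivity) chooses each query based on earlier observations. Both get the same query budget on a 1D convex objective.

```python
def f(x):
    # Example convex objective with minimizer at x = 0.3 (an assumption
    # made for illustration; any convex function would do).
    return (x - 0.3) ** 2


def passive_minimize(f, lo, hi, budget):
    # Passive: evaluate f on a uniform grid chosen before any observation;
    # no query depends on the outcome of another.
    grid = [lo + (hi - lo) * i / (budget - 1) for i in range(budget)]
    return min(grid, key=f)


def active_minimize(f, lo, hi, budget):
    # Active: ternary search; each pair of queries depends on all
    # previous function values, shrinking the interval by 1/3 per step.
    for _ in range(budget // 2):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2


budget = 20
x_passive = passive_minimize(f, 0.0, 1.0, budget)
x_active = active_minimize(f, 0.0, 1.0, budget)
print(abs(x_passive - 0.3), abs(x_active - 0.3))
```

With the same budget of 20 evaluations, the adaptive strategy localizes the minimizer far more tightly (the interval shrinks geometrically), which is exactly the gap the minimax lower bound for the passive setting quantifies.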
Similar References
Unifying Stochastic Convex Optimization and Active Learning
First-order stochastic convex optimization is an extremely well-studied area with a rich history of over a century of optimization research. Active learning is a relatively newer discipline that grew independently of the former, gaining popularity in the learning community over the last few decades due to its promising improvements over passive learning. Over the last year, we have uncovered co...
Towards Minimax Online Learning with Unknown Time Horizon
We consider online learning when the time horizon is unknown. We apply a minimax analysis, beginning with the fixed horizon case, and then moving on to two unknown-horizon settings, one that assumes the horizon is chosen randomly according to some known distribution, and the other which allows the adversary full control over the horizon. For the random horizon setting with restricted losses, we...
Multi-scale exploration of convex functions and bandit convex optimization
We construct a new map from a convex function to a distribution on its domain, with the property that this distribution is a multi-scale exploration of the function. We use this map to solve a decade-old open problem in adversarial bandit convex optimization by showing that the minimax regret for this problem is Õ(poly(n)·√T), where n is the dimension and T the number of rounds. This bound is ...
Minimax Probability
When constructing a classifier, the probability of correct classification of future data points should be maximized. In the current paper this desideratum is translated in a very direct way into an optimization problem, which is solved using methods from convex optimization. We also show how to exploit Mercer kernels in this setting to obtain nonlinear decision boundaries. A worst-case bound on ...
Simple algorithm for computing the communication complexity of quantum communication processes
A two-party quantum communication process with classical inputs and outcomes can be simulated by replacing the quantum channel with a classical one. The minimal amount of classical communication required to reproduce the statistics of the quantum process is called its communication complexity. In the case of many instances simulated in parallel, the minimal communication cost per instance is ca...